
Market Adapter

Market Adapters are the first of two steps for getting data into the Energyworx Platform. A Market Adapter reads the file it receives and, using Python, produces an output that the Platform can interpret, ultimately bringing the data into the system.

The Market Adapter configuration allows the user to upload Python code that is executed at runtime to perform all data normalization. The platform also provides generic Market Adapters for reading common file types:

| File type | Technical name |
| --- | --- |
| CSV | csv |
| Excel | excel |
| JSON/JSONP | json_market_adapter |
| XML | xml |

The configuration name of a Market Adapter must be unique, but the base Market Adapters above can be used in as many configurations as you see fit.

Market Adapter configuration

Market Adapter management can be found at [Smart Integration -> Market Adapters]. Go through the following steps to create, edit, or copy a Market Adapter. To create a new Market Adapter:

  1. Click the [Create] button.
  2. Fill in the Name, Technical name, and Version fields (the Description field is optional). Note: the Technical name must match the Market Adapter's file name in the code, and the Version must match the Market Adapter's class name in the code, which in many cases is V1 or V2.
  3. Click the [Choose a Transformation Configuration] hyperlink to assign a Transformation Configuration. A new window will open.
  4. Use the Search field to narrow down the list if necessary and click [Select].
  5. Click the [+ Add] button in the Properties section. Note: only properties that are available in the Market Adapter's code can be added.
  6. Fill in the Keys and Values fields.
  7. Click [Save] when you're done.

In addition to the fields you filled in, the Market Adapter is also assigned a unique ID.

To edit an existing Market Adapter, go to the Market Adapter overview.

  1. Click the [Details] button. You are now redirected to the Market Adapter page.
  2. Click the [Edit] button.
  3. Make your desired changes and click [Save] when you’re done.

In some cases it can be useful to copy an existing Market Adapter and make the necessary changes from there.

  1. Go to the overview and click the [Details] button of the Market Adapter you want to copy.
  2. Simply click [Copy]. You are now redirected to the Market Adapter page. All the available fields are already filled in.
  3. Make your desired changes and click [Save] when you’re done.

Implementation

In addition to the provided basic Market Adapters, customers can also implement their own Market Adapter in Python. See How to write a Market Adapter for instructions.
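The naming rules from the configuration steps above can be sketched as a minimal, purely hypothetical adapter. The base class, method names, and return format shown here are assumptions for illustration only; the real contract is defined in How to write a Market Adapter. Per the configuration rules, the file would carry the Technical name (e.g. my_meter_readings.py) and the class the Version (V1).

```python
# Purely illustrative skeleton of a custom Market Adapter.
# The method name `parse` and the dict-based output format are
# assumptions; they are NOT the platform's actual interface.

class V1:
    """Hypothetical Market Adapter for a fictional 'meter,timestamp,value' format."""

    def parse(self, file_contents: str) -> list[dict]:
        # Normalize each CSV-like line into a dict that a downstream
        # transformation step could consume (output shape assumed).
        rows = []
        for line in file_contents.strip().splitlines():
            meter, timestamp, value = line.split(",")
            rows.append({"meter": meter,
                         "timestamp": timestamp,
                         "value": float(value)})
        return rows

adapter = V1()
print(adapter.parse("M1,2024-01-01T00:00,1.5"))
```
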

Automated MA Assignment

The platform can automatically assign a Market Adapter (MA) to incoming files as soon as they land in cloud storage. This feature enables fully automated, hands-free data ingestion based on the file's payload type.

Configuration

Automated MA assignment is configured per namespace using the namespace property market_adapters. It can be set by going to Administrator > Namespaces > Details (of a specific namespace).

The property defines which Market Adapter to use for a given payload type. The format is as follows:

{
  "Payload1": 6419342926311424,
  "Payload2": 9876543210123456,
  "Payload3": <MA_id>
}
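The property value is plain JSON mapping payload types to numeric Market Adapter IDs. A small sketch that builds and validates such a value (the helper name is illustrative, not a platform API; payload names and IDs are made up):

```python
import json

def build_market_adapters_property(mapping):
    """Serialize a payload-type -> MA ID mapping into the JSON value
    expected by the market_adapters namespace property (helper is
    illustrative; the platform only needs the resulting JSON)."""
    for payload, ma_id in mapping.items():
        if not isinstance(ma_id, int):
            raise TypeError(f"MA id for {payload!r} must be numeric, got {ma_id!r}")
    return json.dumps(mapping, indent=2)

value = build_market_adapters_property({
    "Payload1": 6419342926311424,
    "Payload2": 9876543210123456,
})
print(value)
```
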

🔒 Platform Restrictions: setting namespace properties can only be done by Platform Administrators.

Availability Note: Prior to version 25.04, this feature was exclusively available to Energyworx accounts and required a request via the service desk. This restriction has been lifted as of 25.04.

Generic Market Adapter Properties

Each generic Market Adapter accepts a set of properties that can be configured in the Properties section of the Market Adapter configuration. The available properties depend on the technical name and version selected.

CSV (csv)

Version V1

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| split_lines | int | 10000000 | Maximum number of data rows per chunk. Only used when no datasource splitting is configured. |
| separator | str | auto-detected | Column delimiter character (e.g. `,`, `;`, `\t`, `\|`, `~`). Required when datasource_id_columns or datasource_id_columns_indices is set. |
| datasource_id_columns | str | | Comma-separated column name(s) that together form the datasource ID. Use when the file has headers and contains data for multiple datasources. Cannot be combined with datasource_id_columns_indices. |
| datasource_id_columns_indices | str | | Comma-separated zero-based column indices that form the datasource ID. Use when the file has no headers. Cannot be combined with datasource_id_columns. |
| has_header | True/False | True | Whether the file contains a header row. Must be False when using datasource_id_columns_indices. |
| index_columns | str | | Comma-separated column names to use as index columns. |
| channels_column | str | | Name of the column containing channel names in a long-format CSV (one row per channel reading). Requires datasource_id_columns. |
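The datasource_id_columns and channels_column properties target long-format files that mix several datasources. A stdlib sketch of the per-datasource grouping this implies (the grouping logic and sample data are illustrative, not the adapter's actual implementation):

```python
import csv
import io
from collections import defaultdict

# Illustrative long-format CSV: one row per channel reading, with the
# datasource ID spread over two columns (ean + meter). Data is made up.
raw = """ean;meter;channel;timestamp;value
8712345;M1;consumption;2024-01-01T00:00;1.5
8712345;M1;production;2024-01-01T00:00;0.2
8765432;M2;consumption;2024-01-01T00:00;3.1
"""

# With separator=';' and datasource_id_columns='ean,meter', the adapter
# conceptually splits the file per datasource like this:
datasource_id_columns = ["ean", "meter"]
groups = defaultdict(list)
for row in csv.DictReader(io.StringIO(raw), delimiter=";"):
    datasource_id = "_".join(row[c] for c in datasource_id_columns)
    groups[datasource_id].append(row)

print(sorted(groups))  # two datasources: 8712345_M1 and 8765432_M2
```
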

Version V2

V2 reads the file via pandas.read_csv and supports transformation steps. All properties are passed directly to pandas.

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| filename_regex | str | | Regular expression the incoming filename must match. Files that do not match are rejected with an error. |
| steps | str | | Comma-separated list of post-read transformation steps to apply to the DataFrame. Supported values: melt, groupby, transpose. |
| sep | str | , | Column separator. |
| delimiter | str | | Alias for sep. |
| header | int or str | infer | Row number(s) to use as column names. |
| names | str | | Comma-separated list of column names to assign (must match the number of columns in the file). |
| index_col | str | | Comma-separated column index/name(s) to use as the row index. |
| usecols | str | | Subset of columns to read. |
| dtype | str | | Data type for all or specific columns. |
| engine | str | | Parser engine (c, python). |
| true_values | str | | Comma-separated values to interpret as True. |
| false_values | str | | Comma-separated values to interpret as False. |
| skipinitialspace | True/False | False | Skip whitespace after the delimiter. |
| skiprows | str | | Comma-separated row indices to skip. |
| skipfooter | int | 0 | Number of rows to skip at the end of the file. |
| nrows | int | | Maximum number of rows to read. |
| na_values | str | | Comma-separated additional strings to treat as NaN. |
| keep_default_na | True/False | True | Whether to include the default set of NaN values. |
| na_filter | True/False | True | Detect missing values. Disable for a small performance gain on files with no missing data. |
| skip_blank_lines | True/False | True | Skip blank lines rather than interpreting them as NaN rows. |
| parse_dates | str | | Comma-separated column names/indices to parse as dates. |
| infer_datetime_format | True/False | False | Infer the datetime format for faster parsing. |
| dayfirst | True/False | False | Interpret dates as day-first (e.g. 01/02/2020 → February 1). |
| keep_date_col | True/False | False | Keep the original date column after parsing. |
| compression | str | infer | Decompression format: infer, gzip, bz2, zip, xz. |
| thousands | str | | Thousands separator character. |
| decimal | str | . | Decimal separator character. |
| lineterminator | str | | Line break character. |
| quotechar | str | " | Character used to denote the start and end of a quoted item. |
| quoting | int | 0 | Quoting mode (maps to Python csv.QUOTE_* constants). |
| doublequote | True/False | True | Interpret two consecutive quote characters as a single quote. |
| escapechar | str | | Character used to escape the delimiter. |
| comment | str | | Character that marks the rest of a line as a comment. |
| encoding | str | | File encoding (e.g. utf-8, latin-1). |
| dialect | str | | CSV dialect to use. |
| delim_whitespace | True/False | False | Use whitespace as the delimiter. |
| low_memory | True/False | True | Process the file in chunks to reduce memory usage. |
| memory_map | True/False | False | Map the file directly into memory. |
| float_precision | str | | Floating-point converter to use (high, legacy, round_trip). |

Excel (excel)

Version XLSX

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| split_lines | int | 10000000 | Maximum number of data rows per chunk. Only used when no datasource splitting is configured. |
| sheet_range | str | (all sheets) | Range of sheet indices to process, e.g. 0-12, 1-, -5. Leave empty to process all sheets. |
| header | int | 1 | 1-based row number of the header. All rows above it are ignored. |
| datasource_id_columns | str | | Comma-separated column name(s) that form the datasource ID. Use when the file contains data for multiple datasources. |
| index_columns | str | | Comma-separated column names to use as index columns. |
| channels_column | str | | Column containing channel names in a long-format file. Requires datasource_id_columns. |
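The sheet_range values 0-12, 1-, and -5 describe an optional start and end over the workbook's sheet indices. One way to interpret them (the helper name and edge-case handling are assumptions for illustration):

```python
def parse_sheet_range(sheet_range: str, n_sheets: int) -> range:
    """Interpret a sheet_range string as a range of sheet indices.
    Illustrative only; the adapter's exact semantics may differ."""
    if not sheet_range:
        return range(n_sheets)                   # empty -> all sheets
    start_s, _, end_s = sheet_range.partition("-")
    start = int(start_s) if start_s else 0       # '-5' -> start at 0
    end = int(end_s) if end_s else n_sheets - 1  # '1-' -> through the last sheet
    return range(start, min(end, n_sheets - 1) + 1)

print(list(parse_sheet_range("1-3", 10)))  # sheets 1, 2 and 3
```
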

Version V1

V1 reads the file via pandas.read_excel and supports transformation steps.

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| steps | str | | Comma-separated list of post-read transformation steps: melt, groupby, transpose. |
| sheet_name | int or str | 0 | Sheet index or name to read. Comma-separated for multiple sheets. |
| header | int | 0 | Row number(s) to use as column names (0-indexed). |
| names | str | | Comma-separated list of column names to assign. |
| index_col | str | | Comma-separated column index/name(s) to use as the row index. |
| dtype | str | | Data type for all or specific columns. |
| engine | str | | Excel parsing engine (openpyxl, xlrd, odf). |
| true_values | str | | Comma-separated values to interpret as True. |
| false_values | str | | Comma-separated values to interpret as False. |
| skiprows | str | | Comma-separated row indices to skip. |
| nrows | int | | Maximum number of rows to read. |
| na_values | str | | Comma-separated additional strings to treat as NaN. |
| keep_default_na | True/False | True | Whether to include the default set of NaN values. |
| verbose | True/False | False | Log verbose output. |
| parse_dates | str | | Comma-separated column indices to parse as dates. |
| thousands | str | | Thousands separator character. |
| comment | str | | Character that marks the rest of a line as a comment. |
| skipfooter | int | 0 | Number of rows to skip at the end of the sheet. |
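Since these properties are forwarded to pandas.read_excel, comma-separated property strings end up as list-valued keyword arguments. A rough sketch of that conversion (the helper name and the exact parsing rules are assumptions for illustration):

```python
def excel_kwargs(properties: dict) -> dict:
    """Convert property strings into pandas.read_excel keyword
    arguments. Which keys are list-valued is assumed here."""
    list_valued = {"names", "true_values", "false_values"}
    kwargs = {}
    for key, value in properties.items():
        if key in list_valued:
            kwargs[key] = [v.strip() for v in value.split(",")]
        else:
            kwargs[key] = value
    return kwargs

kwargs = excel_kwargs({"sheet_name": "readings", "names": "timestamp,value"})
print(kwargs)  # then roughly: pandas.read_excel(path, **kwargs)
```
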

JSON (json_market_adapter)

Version V1

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| json_process_level | str | | Key path within the JSON structure on which to split the file. The value at that key must be a list; each list item becomes a separate element passed to the transform step. If not set, the entire JSON object is passed through as a single element. |

Note: JSON keys in the ingested file must not contain the strings `_`, `:`, or `/`, as these are used internally by the adapter during processing.
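The json_process_level splitting described above can be sketched in a few lines. The `/`-separated key-path syntax and the function shown are assumptions for illustration:

```python
import json

# Illustrative ingested file: the list under 'readings' is the level
# at which the adapter would split the document.
raw = json.dumps({
    "meta": {"sender": "grid-co"},
    "readings": [
        {"meter": "M1", "value": 1.5},
        {"meter": "M2", "value": 3.1},
    ],
})

def split_at_level(document: str, key_path: str) -> list:
    """Walk the key path and return the list found there; each item
    would be passed to the transform step as a separate element."""
    node = json.loads(document)
    for key in key_path.split("/"):
        node = node[key]
    if not isinstance(node, list):
        raise ValueError(f"value at {key_path!r} is not a list")
    return node

elements = split_at_level(raw, "readings")
print(len(elements))  # 2 elements, one per reading
```
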